Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate

Belkin, Mikhail, Hsu, Daniel J., Mitra, Partha

Neural Information Processing Systems

Many modern machine learning models are trained to achieve zero or near-zero training error in order to obtain near-optimal (but non-zero) test error. This phenomenon of strong generalization performance for "overfitted" / interpolated classifiers appears to be ubiquitous in high-dimensional data, having been observed in deep networks, kernel machines, boosting, and random forests. Their performance is consistently robust even when the data contain large amounts of label noise. Very little theory is available to explain these observations. The vast majority of theoretical analyses of generalization allow for interpolation only when there is little or no label noise. This paper takes a step toward a theoretical foundation for interpolated classifiers by analyzing local interpolating schemes, including a geometric simplicial interpolation algorithm and singularly weighted $k$-nearest neighbor schemes. Consistency or near-consistency is proved for these schemes in classification and regression problems.
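To make the flavor of these local schemes concrete, here is a minimal sketch (not the paper's exact construction or constants) of a singularly weighted $k$-nearest neighbor regressor in the spirit of Shepard interpolation: the weight $\|x - x_i\|^{-\alpha}$ blows up as the query approaches a training point, so the rule reproduces every training label exactly while averaging nearby labels elsewhere. The function name and the values of $k$ and $\alpha$ are illustrative choices, not taken from the paper.

```python
import numpy as np

def singular_knn_predict(X_train, y_train, x_query, k=25, alpha=2.0, eps=1e-12):
    """Singularly weighted k-NN regression with weights w_i = ||x_query - x_i||**(-alpha).

    The weights diverge as x_query approaches a training point, so the
    prediction at a training point equals its stored label (the rule
    interpolates), while predictions elsewhere are a weighted average of
    the k nearest labels. For 0/1 labels, threshold the output at 1/2
    to obtain a classifier.
    """
    dists = np.linalg.norm(X_train - x_query, axis=1)
    hit = np.flatnonzero(dists < eps)
    if hit.size:                      # query coincides with a training point
        return y_train[hit[0]]
    nn = np.argsort(dists)[:k]        # restrict to the k nearest neighbors
    w = dists[nn] ** (-alpha)         # singular weights
    return float(np.dot(w, y_train[nn]) / w.sum())
```

Called at a training point, this returns the stored label exactly; away from the data it behaves like a smoothed $k$-NN average, which is the sense in which such rules can interpolate yet remain consistent.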


Reviews: Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate

Neural Information Processing Systems

On further reflection and seeing the responses, I have substantially increased my score. I think the takeaway "interpolating classifiers/regressors need not overfit" is quite important, even if the algorithms studied here are fairly different from the ones people are actually concerned about. I would suggest re-emphasizing that this is the main point of the paper in the introduction, and additionally toning down some of the discussion about a "blessing of dimensionality" as mentioned in your response / below. This is related to the recent "controversy" in learning theory, brought to prominence by [43] and continued in [9, 41], that practical deep learning models (and some kernel-based models) lie far outside the regime of performance explained by current learning theory, for example having extremely high Rademacher complexity, and yet perform well in practice. This paper gives bounds for two particular nearest-neighbor-like models, which interpolate the training data and yet are argued to generalize well in high dimensions.
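As a rough empirical companion to that takeaway, the snippet below reuses the hypothetical `singular_knn_predict` sketch from above on an arbitrary synthetic problem with a 20% label-noise rate and illustrative hyperparameters: it fits the noisy training labels exactly and then reports test error against the clean labels. It is an illustration of the "interpolation need not overfit" claim, not an experiment from the paper.

```python
rng = np.random.default_rng(0)
d, n_train, n_test = 10, 2000, 500
X_tr, X_te = rng.uniform(size=(n_train, d)), rng.uniform(size=(n_test, d))

def clean_label(X):                       # a simple linear decision boundary
    return (X[:, 0] + X[:, 1] > 1.0).astype(float)

flip = rng.random(n_train) < 0.2          # corrupt 20% of the training labels
y_tr = np.logical_xor(clean_label(X_tr).astype(bool), flip).astype(float)
y_te = clean_label(X_te)

train_preds = np.array([singular_knn_predict(X_tr, y_tr, x) for x in X_tr])
test_preds = np.array([singular_knn_predict(X_tr, y_tr, x) for x in X_te])

print("train error (vs. noisy labels):", np.mean(train_preds != y_tr))        # 0: interpolation
print("test error  (vs. clean labels):", np.mean((test_preds > 0.5) != y_te))
```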

